Inductive inference
This article is about the mathematical concept. For inductive inference in logic, see Inductive reasoning.
Around 1960, Ray Solomonoff founded the theory of universal inductive inference: prediction based on observations, for example predicting the next symbol in a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution.
Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
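In standard modern notation (a common formulation, not a quotation of Solomonoff's original definitions), with U a universal machine, q ranging over its programs, \ell(q) the length of q in bits, and U(q) = p* meaning that the output of q begins with p:

```latex
% Universal prior of a prefix p: the total weight of all programs q
% whose output on the universal machine U begins with p.
M(p) = \sum_{q \,:\, U(q) = p\ast} 2^{-\ell(q)}

% The next symbol a is then predicted by the conditional (Bayes) rule:
M(a \mid p) = \frac{M(pa)}{M(p)}
```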
This is a mathematically formalized Occam's razor:[1][2][3][4][5] among all computable theories that perfectly describe the previous observations, shorter theories (those of lower Kolmogorov complexity) receive more weight when calculating the probability of the next observation. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
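As a toy sketch of this weighting scheme (the hypothesis class and description lengths below are invented for illustration; the true universal mixture over all programs is incomputable), one can filter a small set of candidate theories against the observed prefix and weight the survivors by two to the power of minus their description length:

```python
# Toy illustration of Occam-weighted prediction over binary sequences.
# The theories and their "description lengths" are invented for this
# sketch; the true universal mixture over all programs is incomputable.
from fractions import Fraction

# Each theory: (name, description length in bits, total function n -> bit).
theories = [
    ("all zeros",      1, lambda n: 0),
    ("all ones",       1, lambda n: 1),
    ("alternating 01", 2, lambda n: n % 2),
    ("alternating 10", 2, lambda n: (n + 1) % 2),
    ("one every 3rd",  3, lambda n: 1 if n % 3 == 2 else 0),
]

def predict_next(observed):
    """Weight each theory that perfectly reproduces `observed`
    by 2**-(description length); return P(next symbol is 1)."""
    n = len(observed)
    weight = {0: Fraction(0), 1: Fraction(0)}
    for _, length, f in theories:
        if all(f(i) == bit for i, bit in enumerate(observed)):
            weight[f(n)] += Fraction(1, 2 ** length)
    total = weight[0] + weight[1]
    return weight[1] / total if total else None

print(predict_next([0, 0]))  # 1/5: the shorter "all zeros" (predicting 0)
                             # outweighs "one every 3rd" (predicting 1)
```

Both "all zeros" and "one every 3rd" are consistent with the observed prefix, but the shorter theory dominates the prediction, which is exactly the Occam effect described above.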
Another direction of inductive inference is based on E. Mark Gold's model of learning in the limit from 1967, which has since given rise to many further models of learning.[6] The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis, namely an index e with respect to a previously agreed acceptable numbering of all computable functions, such that the indexed function is consistent with the given values of f? A learner M learns a function f if almost all of its hypotheses are the same index e and this index generates the function f; M learns S if M learns every f in S. Basic results are that every recursively enumerable class of functions is learnable, while the class REC of all computable functions is not. Many related models have been considered; the learning of classes of recursively enumerable sets from positive data, in particular, has been studied since Gold's pioneering paper of 1967. A far-reaching extension of Gold's approach is Schmidhuber's theory of generalized Kolmogorov complexities,[7] which are kinds of super-recursive algorithms.
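A minimal sketch of this scenario, assuming a small hand-picked hypothesis list in place of an acceptable numbering of all computable functions: the classic learner by enumeration outputs the least index consistent with the data seen so far, and on any function in the class its guesses eventually stabilize.

```python
# Toy sketch of Gold-style learning in the limit by enumeration.
# The hypothesis list below is hand-picked for illustration; it stands
# in for an acceptable numbering of all computable functions.
hypotheses = [
    lambda n: 0,          # index 0: constant zero
    lambda n: n,          # index 1: identity
    lambda n: n * n,      # index 2: squaring
    lambda n: 2 * n + 1,  # index 3: odd numbers
]

def learner(sample):
    """Given (f(0), ..., f(n)), output the least index e whose function
    agrees with every value seen so far, or None if none does."""
    for e, h in enumerate(hypotheses):
        if all(h(i) == v for i, v in enumerate(sample)):
            return e
    return None

# Present longer and longer initial segments of f(n) = n * n.
f = lambda n: n * n
for n in range(1, 6):
    print(n, learner([f(i) for i in range(n)]))
# Output: "1 0", "2 1", "3 2", "4 2", "5 2" -- the guesses stabilize
# on index 2, so the learner identifies f in the limit.
```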
References
- Angluin, D., and Smith, C. H. (1983) Inductive Inference: Theory and Methods, Comput. Surveys, v. 15, no. 3, pp. 237–269
- Mark Burgin (2005), Super-recursive algorithms, Monographs in computer science, Springer. ISBN 0387955690
- Burgin, M. and Klinger, A. Experience, Generations, and Limits in Machine Learning, Theoretical Computer Science, v. 317, No. 1/3, 2004, pp. 71-91
- Gasarch, W. and Smith, C. H. (1997) A survey of inductive inference with an emphasis on queries. Complexity, logic, and recursion theory, Lecture Notes in Pure and Appl. Math., 187, Dekker, New York, pp. 225-260.
- Sanjay Jain, Daniel Osherson, James Royer and Arun Sharma, Systems that Learn: An Introduction to Learning Theory (second edition), The MIT Press, Cambridge, Massachusetts, 1999.
- Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, 2nd Edition, Springer Verlag, 1997.
- Daniel Osherson, Michael Stob and Scott Weinstein, Systems That Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists, Bradford - The MIT Press, Cambridge, Massachusetts, 1986.
- Ray Solomonoff "Two Kinds of Probabilistic Induction," The Computer Journal, Vol. 42, No. 4, 1999
- Ray Solomonoff "A Formal Theory of Inductive Inference, Part I" Information and Control, Part I: Vol 7, No. 1, pp. 1-22, March 1964
- Ray Solomonoff "A Formal Theory of Inductive Inference, Part II" Information and Control, Part II: Vol. 7, No. 2, pp. 224-254, June 1964
- Hay, Nick. "Universal Semimeasures: An Introduction," CDMTCS Research Report Series, University of Auckland, Feb. 2007.
Numbered references:
- ^ McCall, J. J. (2004). "Induction: From Kolmogorov and Solomonoff to De Finetti and Back to Kolmogorov". Metroeconomica. Wiley Online Library.
- ^ Stork, D. (2001). "Foundations of Occam's razor and parsimony in learning". NIPS 2001 Workshop (ricoh.com).
- ^ Soklakov, A. N. (2002). "Occam's razor as a formal basis for a physical theory". Foundations of Physics Letters. Springer (arxiv.org).
- ^ Hernandez-Orallo, J. (2000). "Beyond the Turing Test". Journal of Logic, Language, and … (dsi.uclm.es).
- ^ Hutter, M. (2003). "On the existence and convergence of computable universal priors". Algorithmic Learning Theory. Springer (arxiv.org).
- ^ Gold, E. M. (1967). "Language identification in the limit". Information and Control 10: 447–474.
- ^ J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291. http://www.idsia.ch/~juergen/kolmogorov.html.